Incrementally Learning the Rules for Supervised Tasks: the Monk's Problems
Abstract
In previous experiments [4], [5], the evolution of variable-length mathematical expressions containing input variables was found to be useful in finding learning rules for simple and hard supervised tasks. However, hard learning problems required special attention in terms of their need for larger codings of the potential solutions and their ability to generalise over the testing set. This paper describes new experiments aiming to find better solutions to these issues. Rather than evolution, a hill-climbing strategy with an incremental coding of potential solutions is used to discover learning rules for the three Monk's problems. It is found that with this strategy larger solutions can easily be coded for. Although better training performance is achieved on the hard learning problems, the ability to generalise over the testing cases is observed to be poor.
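As a rough illustration of this strategy, the sketch below shows a hill climber that mutates and incrementally lengthens a variable-length expression coding. The operator set, the stack-based reading of the coding, and the growth schedule are assumptions made for the sketch, not the coding actually used in the paper.

import random

# A minimal sketch, assuming a fixed operator set and a stack-based reading
# of the variable-length coding; these choices are illustrative only.

OPS = ["+", "-", "*", "min", "max"]


def random_gene(n_inputs):
    """One primitive of the coding: an input variable or an operator."""
    if random.random() < 0.5:
        return ("var", random.randrange(n_inputs))
    return ("op", random.choice(OPS))


def evaluate(genome, x):
    """Read the coding as a simple stack program over the input vector x."""
    stack = []
    for kind, val in genome:
        if kind == "var":
            stack.append(float(x[val]))
        elif len(stack) >= 2:          # binary operators need two operands
            b, a = stack.pop(), stack.pop()
            if val == "+":
                stack.append(a + b)
            elif val == "-":
                stack.append(a - b)
            elif val == "*":
                stack.append(a * b)
            elif val == "min":
                stack.append(min(a, b))
            else:
                stack.append(max(a, b))
    return stack[-1] if stack else 0.0


def accuracy(genome, data):
    """Fraction of (x, y) training pairs classified correctly (y in {0, 1})."""
    return sum((evaluate(genome, x) > 0) == bool(y) for x, y in data) / len(data)


def hill_climb(data, n_inputs, steps=5000, grow_prob=0.1):
    """Hill climbing with incremental growth of the coding length."""
    genome = [random_gene(n_inputs) for _ in range(3)]
    best = accuracy(genome, data)
    for _ in range(steps):
        candidate = list(genome)
        if random.random() < grow_prob:
            candidate.append(random_gene(n_inputs))        # grow the coding
        else:
            candidate[random.randrange(len(candidate))] = random_gene(n_inputs)
        score = accuracy(candidate, data)
        if score >= best:                                   # keep non-worsening moves
            genome, best = candidate, score
    return genome, best


# Toy usage on made-up binary-attribute data (not the Monk's sets themselves).
toy = [((1, 0, 1), 1), ((0, 1, 0), 0), ((1, 1, 1), 1), ((0, 0, 0), 0)]
rule, train_acc = hill_climb(toy, n_inputs=3)
print(train_acc)

Under a scheme of this kind the coding can keep growing whenever a longer candidate does not score worse on the training set, which fits the observation above that larger solutions are easy to code for while generalisation over the test cases remains a separate issue.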
Similar resources
Evolution of Learning Rules for Supervised Tasks I: Simple Learning Problems
Initial experiments with a genetic-based encoding schema are presented as a potentially powerful tool to discover learning rules by means of evolution. Several simple supervised learning tasks are tested. The results indicate the potential of the encoding schema to discover learning rules for more complex and larger learning problems.
Evolution of Learning Rules for Hard Learning Problems
Recent experiments with a genetic-based encoding schema are presented as a potentially useful tool in discovering learning rules by means of evolution. The representation strategy is similar to that used in genetic programming (GP), but it employs only a fixed set of functions to solve a variety of problems. In this paper, the three Monk's and parity problems are tested. The results indicate the useful...
Two-class data classification with an axis-parallel hyper-rectangle
One of the machine learning tasks is supervised learning. In supervised learning we infer a function from labeled training data. The goal of supervised learning algorithms is to learn a good hypothesis that minimizes the sum of the errors. A wide range of supervised algorithms is available, such as decision trees, SVM, and KNN methods. In this paper we focus on decision tree algorithms. When we ...
Learning Decision Lists Using Homogeneous Rules
A decision list is an ordered list of conjunctive rules (Rivest 1987). Inductive algorithms such as AQ and CN2 learn decision lists incrementally, one rule at a time. Such algorithms face the rule overlap problem: the classification accuracy of the decision list depends on the overlap between the learned rules. Thus, even though the rules are learned in isolation, they can only be evaluated in ...
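To make the representation concrete, the sketch below shows first-match-wins evaluation of an ordered list of conjunctive rules; the attributes and rules are invented purely for illustration and are not taken from the cited algorithms.

# A minimal sketch of a decision list: an ordered list of
# (conjunctive condition, class) pairs evaluated first-match-wins,
# so earlier rules shadow later, overlapping ones.

def matches(condition, example):
    """A conjunctive rule fires only if every attribute test holds."""
    return all(example.get(attr) == value for attr, value in condition.items())


def classify(decision_list, example, default="negative"):
    for condition, label in decision_list:
        if matches(condition, example):
            return label
    return default


# Invented rules: the overlap between rule 1 and rule 2 is resolved by order.
rules = [
    ({"colour": "red", "shape": "round"}, "positive"),
    ({"shape": "round"}, "negative"),
]
print(classify(rules, {"colour": "red", "shape": "round"}))   # -> positive
print(classify(rules, {"colour": "blue", "shape": "round"}))  # -> negative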
INTEGRATED ADAPTIVE FUZZY CLUSTERING (IAFC) NEURAL NETWORKS USING FUZZY LEARNING RULES
The proposed IAFC neural networks have both stability and plasticity because they use a control structure similar to that of the ART-1 (Adaptive Resonance Theory) neural network. The unsupervised IAFC neural network is the unsupervised neural network which uses the fuzzy leaky learning rule. This fuzzy leaky learning rule controls the updating amounts by fuzzy membership values. The supervised IAFC ...
Publication year: 1995